influence graph
Influence branching for learning to solve mixed-integer programs online
Strang, Paul, Alès, Zacharie, Bissuel, Côme, Juan, Olivier, Kedad-Sidhoum, Safia, Rachelson, Emmanuel
On the occasion of the computational competition at the 20th Mixed Integer Programming (MIP) Workshop, this work introduces a new approach for learning to solve MIPs online. Influence branching, a new graph-oriented variable-selection strategy, is applied throughout the first iterations of the branch-and-bound algorithm. This branching heuristic is optimized online with Thompson sampling, which ranks the graph representations of a MIP's structure according to the computational speed-up they yield over SCIP. We achieve results comparable to state-of-the-art online learning methods. Moreover, our results indicate that our method generalizes well to broader online settings, where variations in the constraint matrix, the constraint vector, and the objective coefficients can all occur and where more samples are available.
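The online selection step the abstract describes can be sketched as a Beta-Bernoulli Thompson sampler over candidate graph representations. Everything below is illustrative: the arm names, the binary "beat the SCIP baseline" reward, and the synthetic success rates are assumptions, not the paper's actual setup.

```python
import random

class ThompsonSampler:
    """Beta-Bernoulli Thompson sampling over candidate graph representations."""

    def __init__(self, arms):
        # One Beta(1, 1) prior per graph representation of the MIP structure.
        self.posteriors = {arm: [1, 1] for arm in arms}

    def select(self):
        # Sample a plausible success rate from each posterior; play the argmax.
        draws = {arm: random.betavariate(a, b)
                 for arm, (a, b) in self.posteriors.items()}
        return max(draws, key=draws.get)

    def update(self, arm, beat_baseline):
        # Reward is 1 if influence branching with this graph beat the baseline.
        a, b = self.posteriors[arm]
        self.posteriors[arm] = [a + int(beat_baseline), b + 1 - int(beat_baseline)]


random.seed(0)
sampler = ThompsonSampler(["constraint graph", "objective graph", "combined graph"])
for _ in range(500):
    arm = sampler.select()
    # Synthetic environment: one representation beats the baseline 80% of the time.
    reward = random.random() < (0.8 if arm == "objective graph" else 0.2)
    sampler.update(arm, reward)
best = max(sampler.posteriors, key=lambda a: sampler.posteriors[a][0])
```

After a few hundred instances the sampler concentrates its pulls on the representation with the highest empirical speed-up rate, which is the ranking behavior the abstract refers to.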
The Sound of Silence in Social Networks
Aranda, Jesús, Díaz, Juan Francisco, Gaona, David, Valencia, Frank
We generalize the classic multi-agent DeGroot model for opinion dynamics to incorporate the Spiral of Silence theory from political science, which states that individuals may withhold their opinions when they perceive them to be in the minority. As in the DeGroot model, a community of agents is represented as a weighted directed graph whose edges indicate how much agents influence one another. However, agents whose current opinions are in the minority become silent (i.e., they do not express their opinion). Two models for opinion update are then introduced. In the memoryless opinion model ($\mbox{SOM}^-$), agents update their opinion by taking the weighted average of their non-silent neighbors' opinions. In the memory-based opinion model ($\mbox{SOM}^+$), agents update their opinions by taking the weighted average of the opinions of all their neighbors, using, for silent neighbors, their most recently expressed opinion. We show that for $\mbox{SOM}^-$ convergence to consensus is guaranteed for clique graphs but, unlike in the classic DeGroot model, not for strongly-connected aperiodic graphs. In contrast, we show that for $\mbox{SOM}^+$ convergence to consensus is not guaranteed even for clique graphs. We showcase our models through simulations offering experimental insights that align with key aspects of the Spiral of Silence theory. These findings reveal the impact of silence dynamics on opinion formation and highlight the limitations of consensus in more nuanced social models.
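The $\mbox{SOM}^-$ update can be sketched in a few lines. The silence rule below (an agent is silent when its side of the [0,1] opinion spectrum is held by a strict minority) is a simplifying assumption for illustration; the paper's precise definition may differ.

```python
def som_minus_step(opinions, weights):
    """One memoryless Spiral-of-Silence update (SOM-), illustrative version.

    Assumed silence rule: an agent is silent when its side of the
    opinion spectrum (>= 0.5 vs < 0.5) is held by a strict minority.
    Every agent then averages the opinions of the vocal agents only,
    weighted by the influence matrix `weights`.
    """
    n = len(opinions)
    high = sum(o >= 0.5 for o in opinions)
    majority_high = 2 * high >= n
    vocal = [(o >= 0.5) == majority_high for o in opinions]
    new = []
    for i in range(n):
        total = sum(weights[i][j] for j in range(n) if vocal[j])
        if total == 0:  # no vocal neighbor: keep the current opinion
            new.append(opinions[i])
        else:
            new.append(sum(weights[i][j] * opinions[j]
                           for j in range(n) if vocal[j]) / total)
    return new


# 4-agent clique with uniform weights: the lone dissenter (0.9) is
# silenced, so the group converges to the majority's average.
W = [[0.25] * 4 for _ in range(4)]
x = [0.1, 0.2, 0.3, 0.9]
for _ in range(50):
    x = som_minus_step(x, W)
```

On this clique the dynamics reach consensus at the average of the three vocal agents' opinions, consistent with the guaranteed convergence for cliques stated above.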
Learning the Influence Graph of a High-Dimensional Markov Process with Memory
Bagewadi, Smita, Chatterjee, Avhishek
Motivated by multiple applications in social networks, nervous systems, and financial risk analysis, we consider the problem of learning the underlying (directed) influence graph, or causal graph, of a high-dimensional multivariate discrete-time Markov process with memory. At any discrete time instant, each observed variable of the multivariate process is a binary string of random length, parameterized by an unobservable, hidden [0,1]-valued scalar. The hidden scalars corresponding to the variables evolve according to discrete-time linear stochastic dynamics dictated by the underlying influence graph, whose nodes are the variables. We extend an existing algorithm for learning i.i.d. graphical models to this Markovian setting with memory and prove that it can learn the influence graph from the binary observations using a number of samples logarithmic in the number of variables (nodes) when the degree of the influence graph is bounded. The crucial analytical contribution of this work is the derivation of the sample-complexity result by upper- and lower-bounding the rate of convergence of the observed Markov process with memory to its stationary distribution in terms of the parameters of the influence graph.
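A toy generator makes the observation model concrete. The specific noise, the self-loop, and the string-length distribution below are illustrative assumptions; only the overall shape (hidden [0,1] scalars with linear graph-driven dynamics, binary strings of random length as observations) follows the abstract.

```python
import random

def simulate(adj, steps, seed=0):
    """Toy generator for a graph-driven Markov process with string observations.

    `adj[j][i]` is 1 when node j influences node i.  Hidden scalars
    x_i in [0,1] evolve as the average of their in-neighbors (plus a
    self-loop) with small clipped noise; at each step node i emits a
    binary string of random length whose bits are Bernoulli(x_i).
    """
    rng = random.Random(seed)
    n = len(adj)
    x = [rng.random() for _ in range(n)]
    observations = []
    for _ in range(steps):
        # Linear stochastic dynamics dictated by the influence graph.
        nxt = []
        for i in range(n):
            parents = [j for j in range(n) if adj[j][i]] + [i]
            mean = sum(x[j] for j in parents) / len(parents)
            nxt.append(min(1.0, max(0.0, mean + rng.uniform(-0.05, 0.05))))
        x = nxt
        # Emission: one random-length binary string per node, bias x[i].
        observations.append(["".join("1" if rng.random() < x[i] else "0"
                                     for _ in range(1 + rng.randrange(4)))
                             for i in range(n)])
    return observations

# A 3-node chain 0 -> 1 -> 2, observed for 5 steps.
obs = simulate([[0, 1, 0], [0, 0, 1], [0, 0, 0]], steps=5)
```

The learning problem in the paper is the inverse of this generator: recover `adj` from `obs` alone, with the hidden scalars never observed.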
A General Framework for Defending Against Backdoor Attacks via Influence Graph
Sun, Xiaofei, Li, Jiwei, Li, Xiaoya, Wang, Ziyao, Zhang, Tianwei, Qiu, Han, Wu, Fei, Fan, Chun
Deep neural models are susceptible to backdoor attacks, which hack a model by injecting triggers into the input and altering the output to a target label (Gu et al., 2017; Chen et al., 2017; 2020). Consequently, the model behaves normally on clean data, but makes incorrect predictions when encountering attacked data embedded with hidden triggers. We introduce the notion of the influence graph, whose nodes and edges respectively represent individual training points and the associated pairwise influences. The influence between a pair of training points measures the impact of removing one training point on the prediction for another, approximated by the influence function (Koh & Liang, 2017). Malicious training points are extracted by finding the maximum-average subgraph of a given size. Extensive experiments on computer vision and natural language processing tasks demonstrate the effectiveness and generality of the proposed framework.
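The extraction step (find the maximum-average subgraph of a given size) can be approximated with a standard greedy peeling heuristic for densest-k-subgraph. This is an illustration of the idea, not necessarily the paper's exact algorithm, and the influence matrix below is invented.

```python
def max_avg_subgraph(influence, k):
    """Greedy peeling heuristic for a maximum-average subgraph of size k.

    `influence[i][j]` is the (symmetrised) pairwise influence between
    training points i and j.  Repeatedly drop the point with the
    smallest total influence to the surviving set until k points
    remain; tightly coupled points -- the suspected poisons -- survive.
    """
    alive = set(range(len(influence)))
    while len(alive) > k:
        score = {i: sum(influence[i][j] for j in alive if j != i) for i in alive}
        alive.remove(min(score, key=score.get))
    return sorted(alive)


# Points 0-2 influence each other strongly (a planted "malicious"
# cluster); points 3-4 are only loosely connected.
M = [[0, 9, 8, 1, 0],
     [9, 0, 7, 0, 1],
     [8, 7, 0, 1, 0],
     [1, 0, 1, 0, 2],
     [0, 1, 0, 2, 0]]
suspects = max_avg_subgraph(M, 3)
```

Greedy peeling is a common surrogate here because exact densest-k-subgraph is NP-hard; the peel order also gives a natural suspicion ranking.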
Could you give me a hint? Generating inference graphs for defeasible reasoning
Madaan, Aman, Rajagopal, Dheeraj, Tandon, Niket, Yang, Yiming, Hovy, Eduard
Defeasible reasoning is the mode of reasoning in which conclusions can be overturned by taking into account new evidence. A commonly used method in the philosophy and AI literature is to handcraft inference graphs that support argumentation. While humans find inference graphs very useful for reasoning, constructing them at scale is difficult. In this paper, we automatically generate such inference graphs through transfer learning from another NLP task that shares the kind of reasoning that inference graphs support. Through automated metrics and human evaluation, we find that our method generates meaningful graphs for the defeasible inference task. Human accuracy on this task improves by 20% when the generated graphs are consulted. Our findings open up exciting new research avenues for cases where machine reasoning can help human reasoning. (A dataset of 230,000 influence graphs, one for each defeasible query, is available at: https://tinyurl.com/defeasiblegraphs.)
Synthesis of Boolean Networks from Biological Dynamical Constraints using Answer-Set Programming
Chevalier, Stéphanie, Froidevaux, Christine, Paulevé, Loïc, Zinovyev, Andrei
In a Boolean network, the state of each component is determined by a Boolean function of the states of (a subset of) the components of the network. This paper addresses the synthesis of these Boolean functions from constraints on their domain and on emerging dynamical properties of the resulting network. The dynamical properties relate to the existence and absence of trajectories between partially observed configurations, and to the stable behaviours (fixpoints and cyclic attractors). The synthesis is expressed as a Boolean satisfiability problem relying on Answer-Set Programming with a parametrized complexity, and leads to a complete non-redundant characterization of the set of solutions. The considered constraints are particularly suited to the synthesis of models of cellular differentiation processes, as illustrated on a case study. The scalability of the approach is demonstrated on random networks with scale-free structure of 100 to 1,000 nodes, depending on the type of constraints.
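For intuition, the fixpoint part of the dynamical constraints can be checked by brute force on a toy network. The ASP encoding is what makes this scale, so the sketch below is only a specification-level illustration with an invented 3-node network.

```python
from itertools import product

def fixpoints(functions):
    """Enumerate the fixed points of a Boolean network by exhaustion.

    `functions` maps each component name to a Boolean update function
    of the full state; a configuration is a fixpoint when every
    component's update leaves its state unchanged.  Exhaustive over
    2^n states, so only suitable for small illustrative networks.
    """
    names = sorted(functions)
    result = []
    for bits in product([0, 1], repeat=len(names)):
        state = dict(zip(names, bits))
        if all(functions[v](state) == state[v] for v in names):
            result.append(state)
    return result


# Toy network: a and b mutually inhibit each other, c copies a --
# a classic bistable (differentiation-like) motif.
net = {
    "a": lambda s: 1 - s["b"],
    "b": lambda s: 1 - s["a"],
    "c": lambda s: s["a"],
}
fps = fixpoints(net)
```

The mutual-inhibition motif yields exactly two stable configurations, one per "cell fate", which is the kind of stable behaviour the synthesis constraints above pin down.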
WIQA: A dataset for "What if..." reasoning over procedural text
Tandon, Niket, Mishra, Bhavana Dalvi, Sakaguchi, Keisuke, Bosselut, Antoine, Clark, Peter
We introduce WIQA, the first large-scale dataset of "What if..." questions over procedural text. WIQA contains three parts: a collection of paragraphs each describing a process, e.g., beach erosion; a set of crowdsourced influence graphs for each paragraph, describing how one change affects another; and a large (40k) collection of "What if...?" multiple-choice questions derived from the graphs. For example, given a paragraph about beach erosion, would stormy weather result in more or less erosion (or have no effect)? The task is to answer the questions, given their associated paragraph. WIQA contains three kinds of questions: perturbations to steps mentioned in the paragraph; external (out-of-paragraph) perturbations requiring commonsense knowledge; and irrelevant (no effect) perturbations. We find that state-of-the-art models achieve 73.8% accuracy, well below the human performance of 96.3%. We analyze the challenges, in particular tracking chains of influences, and present the dataset as an open challenge to the community.
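The "chains of influences" that make WIQA hard can be illustrated with a tiny sign-propagation helper: multiply the +/- signs along an influence chain to get the qualitative answer. The graph schema and helper below are hypothetical, not WIQA's actual format.

```python
def effect_sign(graph, start, target):
    """Qualitative effect of increasing `start` on `target` (hypothetical helper).

    `graph[u]` lists (v, sign) pairs, sign +1 for "more of u -> more
    of v" and -1 for "more of u -> less of v".  Follows the (assumed
    single) outgoing chain, multiplying signs; returns 0 when `target`
    is unreachable, i.e. the "no effect" answer.
    """
    sign, node, seen = 1, start, {start}
    while node != target:
        edges = [e for e in graph.get(node, []) if e[0] not in seen]
        if not edges:
            return 0
        node, s = edges[0]
        sign *= s
        seen.add(node)
    return sign


# Toy chain for a beach-erosion paragraph: stormier weather -> stronger
# waves -> more sand carried away -> more erosion.
g = {
    "stormy weather": [("wave strength", +1)],
    "wave strength": [("sand carried away", +1)],
    "sand carried away": [("erosion", +1)],
}
answer = effect_sign(g, "stormy weather", "erosion")
```

A model answering WIQA questions has to perform this sign composition implicitly, and over out-of-paragraph perturbations it must also supply the missing edges from commonsense.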
Privacy Risks of Explaining Machine Learning Models
Shokri, Reza, Strobel, Martin, Zick, Yair
Can we trust black-box machine learning with its decisions? Can we trust algorithms to train machine learning models on sensitive data? Transparency and privacy are two fundamental elements of trust for adopting machine learning. In this paper, we investigate the relation between interpretability and privacy. In particular, we analyze whether an adversary can exploit transparent machine learning to infer sensitive information about its training set. To this end, we perform membership inference as well as reconstruction attacks on two popular classes of algorithms for explaining machine learning models: feature-based and record-based influence measures. We empirically show that an attacker that only observes the feature-based explanations has the same power as state-of-the-art membership inference attacks on model predictions. We also demonstrate that record-based explanations can be effectively exploited to reconstruct significant parts of the training set. Finally, our results indicate that minorities and special cases are more vulnerable to these types of attacks than majority groups.
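A minimal version of the feature-based membership attack is a simple threshold on an explanation-derived score. The score distributions below are synthetic and invented for the demo; only the attack shape (threshold a per-record explanation statistic) reflects the setting described above.

```python
import random

def threshold_attack(scores, threshold):
    """Predict membership when the explanation-derived score is below the threshold."""
    return [s < threshold for s in scores]


# Synthetic stand-in for the feature-based setting: explanation norms
# of training members are drawn smaller than those of non-members.
# All numbers here are invented for the demo.
random.seed(1)
member_scores = [random.gauss(0.5, 0.2) for _ in range(200)]
nonmember_scores = [random.gauss(1.5, 0.2) for _ in range(200)]
preds = threshold_attack(member_scores + nonmember_scores, threshold=1.0)
truth = [True] * 200 + [False] * 200
accuracy = sum(p == t for p, t in zip(preds, truth)) / len(truth)
```

The point of the demo is that the attacker never queries the model's predictions: whenever any per-record statistic of the published explanations separates members from non-members, a one-line threshold already leaks membership.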
Understanding Agent Incentives using Causal Influence Diagrams. Part I: Single Action Settings
Everitt, Tom, Ortega, Pedro A., Barnes, Elizabeth, Legg, Shane
Agents are systems that optimize an objective function in an environment. Together, the goal and the environment induce secondary objectives: incentives. Modeling the agent-environment interaction in graphical models called influence diagrams, we can answer two fundamental questions about an agent's incentives directly from the graph: (1) which nodes is the agent incentivized to observe, and (2) which nodes is the agent incentivized to influence? The answers tell us which information and influence points need extra protection. For example, we may want a classifier for job applications not to use the ethnicity of the candidate, and a reinforcement learning agent not to take direct control of its reward mechanism. Different algorithms and training paradigms can lead to different influence diagrams, so our method can be used to identify algorithms with problematic incentives and help in designing algorithms with better incentives.
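A crude, graph-only version of question (2) can be computed by reachability: a node can carry a control incentive only if it lies on a directed path from the decision to the utility node. This is a simplified screen in the spirit of the graphical criteria described above (it ignores observation links and multi-decision settings), and the example diagram is invented.

```python
def _reach(edges, start, forward=True):
    """Nodes reachable from `start`, following edges forward or backward."""
    found, stack = set(), [start]
    while stack:
        cur = stack.pop()
        for u, v in edges:
            a, b = (u, v) if forward else (v, u)
            if a == cur and b not in found:
                found.add(b)
                stack.append(b)
    return found


def control_incentive_candidates(edges, decision, utility):
    """Nodes on a directed path decision -> X -> utility (simplified screen)."""
    downstream = _reach(edges, decision, forward=True)
    upstream = _reach(edges, utility, forward=False)
    return sorted((downstream & upstream) - {decision, utility})


# Toy diagram: the decision D affects the utility U either through the
# task itself or by tampering with the reward mechanism.
edges = [("skill", "application"), ("application", "D"), ("skill", "U"),
         ("D", "reward_mechanism"), ("reward_mechanism", "U")]
points = control_incentive_candidates(edges, "D", "U")
```

Nodes that pass this screen, like the reward mechanism here, are exactly the influence points the abstract says may need extra protection.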
Large-scale power grid hierarchical segmentation based on power-flow affinities
Marot, Antoine, Tazi, Sami, Donnot, Benjamin, Panciatici, Patrick
Segmenting a large-scale power grid into zones allows a better understanding of its structure, much as control-room operators naturally, but manually, do for any study. In this paper we provide a new automatic hierarchical method based on the community-detection algorithm \textit{Infomap}. Our main contribution is to offer as input a new representation of the power grid, called the security analysis, that captures power-flow affinities beyond the connectivity of the grid, a point that will become even more relevant for tomorrow's cyber-physical systems. Indeed, we already discover a few relevant and important clusters that are not connected in the actual grid topology. To better describe and investigate the method, we apply it here to the well-studied IEEE-RTS-96 and IEEE-118 systems. We further applied our method to the large-scale French power grid, with promising results given its puzzling resemblance to the historical RTE regional segmentation.
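As a stand-in for Infomap, a few lines of weighted label propagation show how zoning from power-flow affinities (rather than plain electrical adjacency) works; the affinity matrix below is invented for the demo, and the paper's actual pipeline uses Infomap on the security-analysis representation.

```python
def label_propagation(weights, rounds=10):
    """Weighted label propagation as a lightweight stand-in for Infomap.

    `weights[i][j]` is the power-flow affinity between buses i and j
    (e.g. a sensitivity from a security analysis), not just electrical
    adjacency.  Each bus repeatedly adopts the label with the largest
    total affinity among its neighbors, so high-affinity blocks end up
    sharing a label (a zone).  Deterministic sweep for reproducibility.
    """
    n = len(weights)
    labels = list(range(n))
    for _ in range(rounds):
        for i in range(n):
            tally = {}
            for j in range(n):
                if j != i and weights[i][j] > 0:
                    tally[labels[j]] = tally.get(labels[j], 0.0) + weights[i][j]
            if tally:
                labels[i] = max(tally, key=tally.get)
    return labels


# Two 3-bus blocks with strong internal affinity and one weak tie
# between them: the method recovers the two zones.
W = [[0, 5, 5, 0, 0, 1],
     [5, 0, 5, 0, 0, 0],
     [5, 5, 0, 0, 0, 0],
     [0, 0, 0, 0, 5, 5],
     [0, 0, 0, 5, 0, 5],
     [1, 0, 0, 5, 5, 0]]
zones = label_propagation(W)
```

Because the affinities need not follow grid topology, two buses can land in the same zone without a direct line between them, which is the kind of non-topological cluster the paper highlights.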